Dual first-order methods are powerful techniques for large-scale convex optimization. Although an extensive research effort has been devoted to studying their convergence properties, explicit convergence rates for the primal iterates have only been established under global Lipschitz continuity of the dual gradient. This is a rather restrictive assumption that does not hold for several important classes of problems. In this paper, we demonstrate that primal convergence rate guarantees can also be obtained when the dual gradient is only locally Lipschitz. The class of problems that we analyze admits general convex constraints, including nonlinear inequality, linear equality, and set constraints. As an approximate primal solution, we take the minimizer of the Lagrangian, computed when evaluating the dual gradient. We derive error bounds for this approximate primal solution in terms of the errors of the dual variables, and establish convergence rates of the dual variables when the dual problem is solved using a projected gradient or fast gradient method. By combining these results, we show that the suboptimality and infeasibility of the approximate primal solution at iteration $k$ are no worse than $O(1/\sqrt{k})$ when the dual problem is solved using a projected gradient method, and $O(1/k)$ when a fast dual gradient method is used.
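The scheme described above — running projected gradient ascent on the dual and recovering an approximate primal point from the Lagrangian minimizer computed at each dual gradient evaluation — can be sketched on a toy problem. This is a minimal illustration, not the paper's algorithm: the quadratic objective, the single linear inequality constraint, the step size, and all names are assumptions chosen so that the Lagrangian minimizer has a closed form.

```python
import numpy as np

# Toy problem (assumed for illustration, not from the paper):
#   minimize    0.5 * ||x - c||^2
#   subject to  A @ x <= b
#
# The dual function is g(lam) = min_x L(x, lam) over lam >= 0, where
# L(x, lam) = 0.5*||x - c||^2 + lam @ (A @ x - b).  Here the Lagrangian
# minimizer is x(lam) = c - A.T @ lam, and the dual gradient is the
# constraint residual A @ x(lam) - b.

def dual_projected_gradient(A, b, c, step, iters):
    """Projected gradient ascent on the dual of the toy QP above."""
    lam = np.zeros(A.shape[0])
    for _ in range(iters):
        x = c - A.T @ lam                          # minimizer of the Lagrangian
        grad = A @ x - b                           # dual gradient
        lam = np.maximum(0.0, lam + step * grad)   # project onto lam >= 0
    x = c - A.T @ lam                              # approximate primal solution
    return x, lam

A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([2.0, 2.0])

# Dual gradient is Lipschitz with constant ||A||^2 = 2, so step = 0.5 is safe.
x, lam = dual_projected_gradient(A, b, c, step=0.5, iters=50)
# x approaches the projection of c onto the halfspace, here (0.5, 0.5).
```

The key point the sketch illustrates is that the primal iterate is a by-product of evaluating the dual gradient: no separate primal recovery step is needed, and its suboptimality and infeasibility can be bounded through the dual error.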